    Robotics at NASA and Beyond

    No abstract available.

    Towards Supervising Remote Dexterous Robots Across Time Delay

    The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will need to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading the plan for a day seems excessive. An approach for controlling dexterous robots under intermediate time delay is presented, in which software running within a ground control cockpit predicts the intention of an immersed robot supervisor, and the remote robot then autonomously executes the supervisor's intended tasks. Initial results are presented.

    Supervising Remote Humanoids Across Intermediate Time Delay

    The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will need to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading the plan for a day seems excessive. An approach for controlling humanoids under intermediate time delay is presented. This approach uses software running within a ground control cockpit to predict an immersed robot supervisor's motions, which the remote humanoid then autonomously executes. Initial results are presented.
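
    Neither abstract spells out the implementation, but a minimal sketch of the predict-then-execute idea, with hypothetical task names and a simple nearest-template rule standing in for the cockpit-side intention predictor, might look like this:

```python
import math

# Hypothetical task templates: nominal supervisor hand positions (x, y, z)
# in metres that the cockpit-side predictor matches motion against.
TASK_TEMPLATES = {
    "grasp_handrail": (0.6, 0.2, 1.1),
    "flip_switch":    (0.5, -0.1, 1.3),
    "stow_tool":      (0.3, 0.4, 0.9),
}

def predict_intended_task(hand_pos):
    """Return the task whose template is nearest the supervisor's hand.

    A real predictor would use the full immersive tracking stream; this
    nearest-template rule is only an illustrative assumption.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TASK_TEMPLATES, key=lambda t: dist(hand_pos, TASK_TEMPLATES[t]))

def supervise(hand_pos, send_command):
    """Cockpit side: predict intent locally (with no delay) and uplink a
    single task-level command instead of streaming raw teleoperation data."""
    task = predict_intended_task(hand_pos)
    send_command(task)  # this command crosses the multi-second time delay
    return task

def execute(task):
    """Remote side: the robot carries out the named task autonomously and
    only reports status, so the delay is paid once per task."""
    print(f"executing {task} autonomously")

if __name__ == "__main__":
    supervise((0.58, 0.18, 1.12), execute)
```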

    ROS wrapper for real-time multi-person pose estimation with a single camera

    For robots to be deployable in human-occupied environments, they must be human-aware and generate human-aware behaviors and policies. OpenPose is a library for real-time multi-person keypoint detection. We have implemented a ROS package for estimating 2D pose from simple RGB images: a ROS wrapper that automatically recovers the pose of several people from a single camera using OpenPose. Additionally, a ROS node has been developed to obtain 3D pose estimates from the initial 2D estimates when a depth image is synchronized with the RGB image (an RGB-D image, such as from a Kinect camera). This is achieved by projecting the 2D pose estimate onto the point cloud of the depth image.
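
    The projection step amounts to a pinhole back-projection of each keypoint using its depth value. A minimal sketch outside of ROS, assuming an aligned depth image in metres and made-up camera intrinsics, might look like this:

```python
import numpy as np

def keypoints_2d_to_3d(keypoints, depth, fx, fy, cx, cy):
    """Back-project 2D (u, v) keypoints to 3D camera coordinates.

    keypoints : iterable of (u, v) pixel coordinates from OpenPose
    depth     : depth image in metres, aligned with the RGB image
    fx, fy, cx, cy : pinhole intrinsics of the RGB-D camera
    Returns an (N, 3) array of (x, y, z) points; keypoints without a
    valid depth reading come back as NaN.
    """
    points = []
    for u, v in keypoints:
        z = float(depth[int(round(v)), int(round(u))])
        if z <= 0.0 or np.isnan(z):
            points.append((np.nan, np.nan, np.nan))
            continue
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return np.array(points)

# Example with synthetic data; real use would take the depth image and
# intrinsics from the synchronized RGB-D (e.g. Kinect) ROS topics.
depth = np.full((480, 640), 2.0)               # flat scene 2 m away
joints_2d = [(320.0, 240.0), (300.0, 200.0)]   # OpenPose pixel keypoints
print(keypoints_2d_to_3d(joints_2d, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```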

    Forming Human-Robot Teams Across Time and Space

    NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become caretakers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot-only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team, acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication link, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round-trip time period, the authors developed a multi-layer safety system able to detect hazards and protect the robot and people in its workspace. This safety system is also running when humans are in direct contact with the robot, so it involves both internal fault detection and force sensing for unintended external contacts. The designs for the supervisory command mode and the redundant safety system will be described. Specific implementations were developed and test results will be reported. Experiments were conducted using terrestrial analogs for deep space missions, where time delays were artificially added to emulate the longer distances found in space.
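
    The abstract names the safety layers without detailing them; a minimal sketch of an onboard check combining internal fault detection with force sensing for unintended contacts, using assumed limits and signal names, might be:

```python
from dataclasses import dataclass

# Assumed limits for illustration only; real values would come from the
# robot's certified safety requirements.
MAX_CONTACT_FORCE_N = 20.0   # force-sensed limit for unintended contact
MAX_JOINT_TEMP_C = 85.0      # example internal fault: motor over-temperature

@dataclass
class RobotState:
    contact_force_n: float   # magnitude from wrist/joint force-torque sensing
    joint_temp_c: float      # hottest joint temperature
    comm_age_s: float        # time since the last ground command arrived

def safety_check(state: RobotState, round_trip_s: float) -> bool:
    """Return True if it is safe to keep executing the current command.

    The robot must protect itself and nearby people for at least the
    round-trip communication time, so all checks run onboard.
    """
    if state.contact_force_n > MAX_CONTACT_FORCE_N:
        return False                      # unintended external contact
    if state.joint_temp_c > MAX_JOINT_TEMP_C:
        return False                      # internal fault detected
    if state.comm_age_s > 2.0 * round_trip_s:
        return False                      # lost contact with the supervisor
    return True

if __name__ == "__main__":
    print(safety_check(RobotState(5.0, 60.0, 3.0), round_trip_s=4.0))   # True
    print(safety_check(RobotState(35.0, 60.0, 3.0), round_trip_s=4.0))  # False
```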

    Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed

    The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths: autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human-robot interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for several reasons: the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high-DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.
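
    As a simple illustration of operator-placed virtual obstacles and path checking (not the actual R2 planner, which handles a high-DOF constrained system through rviz), a sketch with hypothetical box obstacles might be:

```python
# Hypothetical sketch: reject path waypoints that enter any operator-placed
# axis-aligned box obstacle in the robot's virtual environment.

def inside_box(point, box):
    """box = ((xmin, ymin, zmin), (xmax, ymax, zmax)) in metres."""
    lo, hi = box
    return all(lo[i] <= point[i] <= hi[i] for i in range(3))

def path_is_clear(waypoints, obstacles):
    """Return True if no waypoint lies inside an operator-placed obstacle."""
    return not any(inside_box(p, b) for p in waypoints for b in obstacles)

# The operator places one box around a cluttered rack; the candidate
# end-effector path passes nearby but stays outside it.
obstacles = [((0.4, -0.2, 0.8), (0.7, 0.2, 1.2))]
path = [(0.2, 0.0, 1.0), (0.35, 0.0, 1.0), (0.35, 0.3, 1.0)]
print(path_is_clear(path, obstacles))  # True
```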

    Future Exploration Missions' Tasks Associated with the Risk of Inadequate Design of Human and Automation/Robotic Integration

    NASA's Human Research Program (HRP) funds research efforts aimed at mitigating various human health and performance risks, including the Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI). As such, within HRP, the Human Factors and Behavioral Performance (HFBP) Element tasked an evaluation of future HARI needs in order to scope and focus the HARI risk research plan. The objective was to provide a systematic understanding of the critical factors associated with effective HARI that will be necessary to achieve the future mission goals for near- and deep-space exploration. Future mission goals are specified by NASA Design Reference Missions (DRMs) that are pertinent to the HRP. The outcome of this evaluation is a set of NASA-relevant HARI tasks, factors, and interactions required for exploration-class missions.

    Telerobotics Workstation (TRWS) for Deep Space Habitats

    On medium- to long-duration human spaceflight missions, latency in communications from Earth could reduce efficiency or hinder local operations, control, and monitoring of the various mission vehicles and other elements. Regardless of the degree of autonomy of any one particular element, a means of monitoring and controlling the elements in real time based on mission needs would increase efficiency and response times for their operation. Since human crews would be present locally, a local means for monitoring and controlling all the various mission elements is needed, particularly for robotic elements, where response to interesting scientific features in the environment might require near-instantaneous manipulation and control. One of the elements proposed for medium- and long-duration human spaceflight missions, the Deep Space Habitat (DSH), is intended to be used as a remote residence and working volume for human crews. The proposed solution for local monitoring and control is to provide a workstation within the DSH where local crews can operate local vehicles and robotic elements with little to no latency. The Telerobotics Workstation (TRWS) is a multi-display computer workstation mounted in a dedicated location within the DSH that can be adjusted into a variety of configurations as required. From an Intra-Vehicular Activity (IVA) location, the TRWS uses the Robot Application Programming Interface Delegate (RAPID) control environment through the local network to remotely monitor and control vehicles and robotic assets located outside the pressurized volume, in the immediate vicinity or at low-latency distances from the habitat. The multiple display area of the TRWS allows the crew to have numerous windows open with live video feeds, control windows, and data browsers, as well as local monitoring and control of the DSH and associated systems.

    Tele-Operated Lunar Rover Navigation Using Lidar

    Near real-time tele-operated driving on the lunar surface remains constrained by bandwidth and signal latency despite the Moon's relative proximity. As part of our work within NASA's Human-Robotic Systems Project (HRS), we have developed a stand-alone, modular, LIDAR-based safeguarded tele-operation system of hardware, middleware, navigation software and user interface. The system has been installed on two distinct NASA rovers, JSC's Centaur2 lunar rover prototype and ARC's KRex research rover, and tested over several kilometers of tele-operated driving at average sustained speeds of 0.15 to 0.25 m/s around rocks, slopes and simulated lunar craters using a deliberately constrained telemetry link. The navigation system builds onboard terrain and hazard maps, returning the highest priority sections to the off-board operator as permitted by bandwidth availability. It also analyzes hazard maps onboard and can stop the vehicle prior to contacting hazards. It is robust to severe pose errors and uses a novel scan alignment algorithm to compensate for attitude and elevation errors.
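
    The abstract does not describe the onboard safeguard in detail; a minimal sketch of a stopping rule over a hazard-cost grid, with assumed cell costs and stopping distance, might be:

```python
import math

STOP_DISTANCE_M = 1.5   # assumed: distance within which a hazard forces a stop
HAZARD_COST = 0.8       # assumed: cells above this cost are treated as hazards

def must_stop(hazard_map, cell_size_m, rover_cell):
    """Return True if any hazardous cell lies within the stopping distance.

    hazard_map : 2D list of traversal costs in [0, 1] built onboard from LIDAR
    cell_size_m: edge length of one map cell in metres
    rover_cell : (row, col) of the rover in the map
    """
    r0, c0 = rover_cell
    for r, row in enumerate(hazard_map):
        for c, cost in enumerate(row):
            if cost < HAZARD_COST:
                continue
            dist = cell_size_m * math.hypot(r - r0, c - c0)
            if dist <= STOP_DISTANCE_M:
                return True
    return False

# 0.25 m cells: a rock (cost 0.9) one metre ahead of the rover triggers a stop.
grid = [[0.1] * 8 for _ in range(8)]
grid[2][4] = 0.9
print(must_stop(grid, cell_size_m=0.25, rover_cell=(6, 4)))  # True
```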